Yassine Benajiba

Towards Long Context Hallucination Detection

Apr 28, 2025

MemInsight: Autonomous Memory Augmentation for LLM Agents

Mar 27, 2025

A Study on Leveraging Search and Self-Feedback for Agent Reasoning

Feb 17, 2025

TReMu: Towards Neuro-Symbolic Temporal Reasoning for LLM-Agents with Memory in Multi-Session Dialogues

Feb 03, 2025

Self-supervised Analogical Learning using Language Models

Feb 03, 2025

DiverseAgentEntropy: Quantifying Black-Box LLM Uncertainty through Diverse Perspectives and Multi-Agent Interaction

Dec 12, 2024

Inference time LLM alignment in single and multidomain preference spectrum

Oct 24, 2024

Open Domain Question Answering with Conflicting Contexts

Oct 16, 2024

Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models

Oct 11, 2024

Active Evaluation Acquisition for Efficient LLM Benchmarking

Oct 08, 2024